Underwater Visual-Inertial-Acoustic-Depth SLAM with DVL Preintegration for Degraded Environments

Ding, Shuoshuo, Zhang, Tiedong, Jiang, Dapeng, Lei, Ming

arXiv.org Artificial Intelligence

Abstract -- Visual degradation caused by limited visibility, insufficient lighting, and feature scarcity in underwater environments presents significant challenges to visual-inertial simultaneous localization and mapping (SLAM) systems. To address this, we propose an underwater SLAM system whose key innovation lies in the tight integration of four distinct sensor modalities (visual, inertial, acoustic, and depth) to ensure reliable operation even under degraded visual conditions. To mitigate Doppler velocity log (DVL) drift and improve measurement efficiency, we propose a novel velocity-bias-based DVL preintegration strategy. At the front end, hybrid tracking strategies and acoustic-inertial-depth joint optimization enhance system stability. Additionally, multi-source hybrid residuals are incorporated into a graph optimization framework. Extensive quantitative and qualitative analyses of the proposed system are conducted in both simulated and real-world underwater scenarios. The results demonstrate that our approach outperforms current state-of-the-art stereo visual-inertial SLAM systems in both stability and localization accuracy, exhibiting exceptional robustness, particularly in visually challenging environments.

HUMAN activities in the fields of ocean engineering and marine science are increasing steadily, encompassing scientific expeditions to study underwater hydrothermal vents and archaeological sites, inspections and maintenance of subsea pipelines and reservoirs, and salvage operations for wrecked aircraft and vessels.

Shuoshuo Ding, Tiedong Zhang, and Dapeng Jiang are with the School of Ocean Engineering and Technology & Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai), Sun Yat-sen University, Zhuhai 519082, China, with the Guangdong Provincial Key Laboratory of Information Technology for Deep Water Acoustics, Zhuhai 519082, China, and also with the Key Laboratory of Comprehensive Observation of Polar Environment (Sun Yat-sen University), Ministry of Education, Zhuhai 519082, China (e-mail: dingshsh5@mail2.sysu.edu.cn,
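The abstract's velocity-bias-based DVL preintegration can be illustrated with a minimal planar sketch: body-frame velocity samples, corrected by a constant bias estimate, are rotated into the frame of the first sample and summed into a relative displacement between keyframes. The function name, 2D motion model, and constant-bias assumption are illustrative simplifications, not the authors' formulation.

```python
import math

def dvl_preintegrate(vels, yaws, dt, bias=(0.0, 0.0)):
    """Accumulate body-frame DVL velocities into a relative displacement.

    vels: list of (vx, vy) body-frame velocity samples from the DVL
    yaws: heading of the body at each sample (e.g. from the IMU)
    bias: constant velocity-bias estimate subtracted from each sample
    Returns the displacement between the first and last sample, expressed
    in the frame of the first sample.
    """
    yaw0 = yaws[0]
    dx = dy = 0.0
    for (vx, vy), yaw in zip(vels, yaws):
        # remove the estimated bias, then rotate into the first sample's frame
        bvx, bvy = vx - bias[0], vy - bias[1]
        c, s = math.cos(yaw - yaw0), math.sin(yaw - yaw0)
        dx += (c * bvx - s * bvy) * dt
        dy += (s * bvx + c * bvy) * dt
    return dx, dy
```

One second of 1 m/s forward motion at constant heading integrates to a 1 m forward displacement; a bias equal to the measured velocity integrates to zero, which is the drift-mitigation role the bias term plays.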


Efficient Force and Stiffness Prediction in Robotic Produce Handling with a Piezoresistive Pressure Sensor

Fairchild, Preston, Chen, Claudia, Tan, Xiaobo

arXiv.org Artificial Intelligence

Abstract: Properly handling delicate produce with robotic manipulators is a major part of the future role of automation in agricultural harvesting and processing. Grasping with the correct amount of force is crucial not only to ensure a proper grip on the object, but also to avoid damaging or bruising the product. In this work, a flexible pressure sensor that is both low-cost and easy to fabricate is integrated with robotic grippers for working with produce of varying shapes, sizes, and stiffnesses. The sensor is successfully integrated with both a rigid robotic gripper and a pneumatically actuated soft finger. Furthermore, an algorithm is proposed for accelerated estimation of the steady-state value of the sensor output from transient response data, enabling real-time applications. The sensor is shown to be effective in incorporating feedback to correctly grasp objects of unknown sizes and stiffnesses. At the same time, the sensor provides estimates of these values, which can be utilized to identify qualities such as ripeness level and bruising. It is also shown to provide force feedback for objects of variable stiffness. This enables future use not only for produce identification, but also for tasks such as quality control and selective distribution based on ripeness levels.

Keywords: robotics, sensing, produce handling, grasping

Highlights:
- Low-cost and easy-to-fabricate sensor for easy implementation with a variety of robotic grippers
- Fast estimation of settled resistance using an exponential decay curve fit
- Measurement of the grasping force and stiffness of a held object
- Various produce handling features such as ripeness monitoring, bruising detection, and size estimation

1. Introduction: The use of robotic end-effectors for securely grasping objects is a pivotal component in manipulation tasks.
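The "fast estimation of settled resistance using an exponential decay curve fit" highlighted above can be sketched in closed form: for a first-order response y(t) = C + A * exp(-t / tau), three equally spaced transient samples determine the settled value C without waiting for the sensor to settle. This is a minimal three-sample variant of the idea, not necessarily the authors' exact fitting algorithm.

```python
def settle_estimate(y0, y1, y2):
    """Estimate the steady-state value of a first-order exponential response
    from three equally spaced transient samples y(t), y(t+dt), y(t+2*dt).

    Assumes y(t) = C + A * exp(-t / tau); returns the settled value C.
    Successive differences of such a sequence shrink by a constant ratio
    r = exp(-dt / tau), which lets C be recovered algebraically.
    """
    d0, d1 = y1 - y0, y2 - y1
    if d0 == 0:            # signal already settled
        return y2
    r = d1 / d0            # = exp(-dt / tau)
    if not 0 < r < 1:
        raise ValueError("samples are not on a decaying exponential")
    return y0 - d0 / (r - 1)
```

For example, samples 40, 70, 85 (differences halving each step) extrapolate to a settled value of 100. In practice one would fit over many samples to average out noise; the closed form above is the noise-free core of the technique.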


M3D-skin: Multi-material 3D-printed Tactile Sensor with Hierarchical Infill Structures for Pressure Sensing

Yoshimura, Shunnosuke, Kawaharazuka, Kento, Okada, Kei

arXiv.org Artificial Intelligence

Tactile sensors have a wide range of applications, from robotic grippers to human motion measurement. If tactile sensors could be fabricated and integrated more easily, their applicability would expand further. In this study, we propose a tactile sensor, M3D-skin, that can be fabricated easily and with high versatility by leveraging the infill patterns of a multi-material fused deposition modeling (FDM) 3D printer as the sensing principle. This method employs conductive and non-conductive flexible filaments to create a hierarchical structure with a specific infill pattern. The flexible hierarchical structure deforms under pressure, leading to a change in electrical resistance that enables the acquisition of tactile information. We measure how the characteristics of the proposed tactile sensor change with modifications to the hierarchical structure. Additionally, we demonstrate the fabrication and use of a multi-tile sensor. Furthermore, as applications, we implement motion-pattern measurement on the sole of a foot, integration with a robotic hand, and tactile-based robotic operations. Through these experiments, we validate the effectiveness of the proposed tactile sensor.
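As a rough illustration of how a pressure-dependent resistance like this might be read out, the sketch below infers sensor resistance from a voltage-divider ADC reading. The wiring, supply voltage, and reference resistor value are hypothetical assumptions; the abstract does not specify the readout circuit.

```python
def divider_resistance(v_out, v_cc=3.3, r_ref=10_000.0):
    """Infer a resistive sensor's value from a voltage-divider reading.

    Hypothetical wiring (not from the paper): the sensor sits between V_cc
    and the ADC node, with a known reference resistor r_ref from the node
    to ground, so v_out = v_cc * r_ref / (r_sensor + r_ref).
    """
    if not 0 < v_out < v_cc:
        raise ValueError("reading must lie strictly between 0 and v_cc")
    return r_ref * (v_cc - v_out) / v_out
```

With the assumed 3.3 V supply and 10 kOhm reference, a mid-rail reading of 1.65 V corresponds to a sensor resistance equal to the reference, 10 kOhm; as pressure lowers the sensor's resistance, v_out rises.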



A control scheme for collaborative object transportation between a human and a quadruped robot using the MIGHTY suction cup

Plotas, Konstantinos, Papadakis, Emmanouil, Drosakis, Drosakis, Trahanias, Panos, Papageorgiou, Dimitrios

arXiv.org Artificial Intelligence

Please find the citation info at Zenodo, as the proceedings of ICRA are no longer sent to IEEE Xplore. This is a pre-print version of the paper presented at the IEEE International Conference on Robotics and Automation 2025 (ICRA), Atlanta, US.

Abstract -- In this work, a control scheme for human-robot collaborative object transportation is proposed, considering a quadruped robot equipped with the MIGHTY suction cup, which serves both as a gripper for holding the object and as a force/torque sensor. The proposed control scheme is based on the notion of admittance control and incorporates a variable damping term aimed at increasing the controllability afforded to the human and, at the same time, decreasing her/his effort. Furthermore, to ensure that the object does not detach from the suction cup during the collaboration, an additional control signal is proposed, based on a barrier artificial potential. The proposed control scheme is proven to be passive, and its performance is demonstrated through experimental evaluations conducted using the Unitree Go1 robot equipped with the MIGHTY suction cup.
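The admittance-control idea with a variable damping term can be sketched as one discrete update of m * dv/dt + d(v) * v = f_h, where f_h is the force the human applies through the object. The damping profile below (high at low speed for precise positioning, lower at high speed for less effort) and all parameter values are hypothetical; the paper's actual damping law and its barrier-potential term for preventing detachment are not reproduced here.

```python
import math

def admittance_step(v, f_h, dt, m=5.0, d_min=5.0, d_max=40.0, k=2.0):
    """One explicit-Euler step of m * dv/dt + d(v) * v = f_h.

    v:   current commanded velocity along one axis (m/s)
    f_h: measured human interaction force along that axis (N)
    Returns the next commanded velocity. The variable damping d(v) decays
    from d_max at rest toward d_min at speed (hypothetical profile).
    """
    d = d_min + (d_max - d_min) * math.exp(-k * abs(v))
    return v + dt * (f_h - d * v) / m
```

From rest, a 10 N push with m = 5 kg advances the velocity by f_h/m * dt per step; as speed builds, the damping drops toward d_min, so sustaining motion requires less human force, which is the effort-reduction rationale the abstract describes.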


Aucamp: An Underwater Camera-Based Multi-Robot Platform with Low-Cost, Distributed, and Robust Localization

Xu, Jisheng, Lin, Ding, Fong, Pangkit, Fang, Chongrong, Duan, Xiaoming, He, Jianping

arXiv.org Artificial Intelligence

This paper introduces an underwater multi-robot platform, named Aucamp, characterized by cost-effective monocular-camera-based sensing, a distributed protocol, and robust orientation control for localization. We utilize the image clarity feature to measure distance, present the monocular imaging model, and estimate the position of the target object. We achieve global positioning in our platform by designing a distributed update protocol. The distributed algorithm enables the perception process to cover a broader range simultaneously and greatly improves the accuracy and robustness of the positioning. Moreover, we obtain an explicit dynamics model of the robot in our platform, based on which we propose a robust orientation control framework. The control system ensures that each robot maintains a balanced posture, thereby ensuring the stability of the localization system. The platform can swiftly recover from a forced unstable state to a stable horizontal posture. Additionally, we conduct extensive experiments and application scenarios to evaluate the performance of our platform. The proposed platform may provide support for extensive marine exploration by underwater sensor networks.
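The monocular imaging model mentioned above relates apparent size to distance: under the standard pinhole model, an object of real height H at distance Z projects to h = f * H / Z pixels for focal length f in pixels, so Z = f * H / h. The sketch below shows only this textbook relation; the platform's clarity-based ranging is a separate mechanism not reproduced here.

```python
def pinhole_distance(focal_px, real_height_m, pixel_height):
    """Distance to an object of known size under the pinhole camera model.

    focal_px:      focal length expressed in pixels
    real_height_m: known physical height of the target (m)
    pixel_height:  measured height of the target in the image (px)
    From h / f = H / Z it follows that Z = f * H / h.
    """
    if pixel_height <= 0:
        raise ValueError("target must be visible (pixel_height > 0)")
    return focal_px * real_height_m / pixel_height
```

For instance, with an assumed 800 px focal length, a 10 cm target imaged at 40 px height lies 2 m away; halving the pixel height doubles the estimated distance.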


P2P-Insole: Human Pose Estimation Using Foot Pressure Distribution and Motion Sensors

Watanabe, Atsuya, Aisuwarya, Ratna, Jing, Lei

arXiv.org Artificial Intelligence

This work presents P2P-Insole, a low-cost approach for estimating and visualizing 3D human skeletal data using insole-type sensors integrated with IMUs. Each insole, fabricated with e-textile garment techniques, costs under USD 1, making it significantly cheaper than commercial alternatives and ideal for large-scale production. Our approach uses foot pressure distribution, acceleration, and rotation data to overcome the limitations of existing approaches, providing a lightweight, minimally intrusive, and privacy-aware solution. The system employs a Transformer model for efficient temporal feature extraction, with the input stream enriched by first and second derivatives. Including multimodal information, such as accelerometer and rotational measurements, improves the accuracy of complex motion-pattern recognition. These benefits are demonstrated experimentally, and error metrics show the robustness of the approach across various posture estimation tasks. This work could serve as the foundation for low-cost, practical applications in rehabilitation, injury prevention, and health monitoring, while enabling further development through sensor optimization and expanded datasets.
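Enriching the input stream with first and second derivatives can be sketched as finite-difference feature augmentation before the Transformer: each time step's feature vector is extended with its first and second differences. The zero-padding convention and function name are illustrative choices, not the paper's exact preprocessing.

```python
def augment_with_derivatives(seq):
    """Append first and second finite differences to each time step.

    seq: list of feature vectors (lists of floats), one per time step.
    Returns vectors of triple width, [x, dx, ddx], with the differences
    zero-padded at the start of the sequence where they are undefined.
    """
    n, dim = len(seq), len(seq[0])
    zeros = [0.0] * dim
    # first differences: dx[t] = x[t] - x[t-1]
    d1 = [zeros] + [[b - a for a, b in zip(seq[i - 1], seq[i])] for i in range(1, n)]
    # second differences: ddx[t] = dx[t] - dx[t-1]
    d2 = [zeros] + [[b - a for a, b in zip(d1[i - 1], d1[i])] for i in range(1, n)]
    return [seq[i] + d1[i] + d2[i] for i in range(n)]
```

A scalar stream 1, 2, 4 becomes rows [1, 0, 0], [2, 1, 1], [4, 2, 1]: the model sees value, velocity, and acceleration of each channel at every step.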


A Vision-Enabled Prosthetic Hand for Children with Upper Limb Disabilities

Sarker, Md Abdul Baset, Nguyen, Art, Kukla, Sigmond, Fite, Kevin, Imtiaz, Masudul H.

arXiv.org Artificial Intelligence

Preprint. Paper submission date: Mar 31, 2025. Md Abdul Baset Sarker (e-mail: sarkerm@clarkson.edu) and Art Nguyen (e-mail: nguyenqp@clarkson.edu) are with Clarkson University, Potsdam, NY 13699, USA. This work was supported in part by Clarkson University, Potsdam, NY.

ABSTRACT This paper introduces a novel AI vision-enabled pediatric prosthetic hand designed to assist children aged 10-12 with upper limb disabilities. The prosthesis features an anthropomorphic appearance, multi-articulating functionality, and a lightweight design that mimics a natural hand, making it both accessible and affordable for low-income families. Using 3D printing technology and integrating advanced machine vision, sensing, and embedded computing, the prosthetic hand offers a low-cost, customizable solution that addresses the limitations of current myoelectric prostheses. A micro camera is interfaced with a low-power FPGA for real-time object detection and assists with precise grasping. The onboard DL-based object detection and grasp classification models achieved accuracies of 96% and 100%, respectively. In force prediction, the mean absolute error was found to be 0.018. The features of the proposed prosthetic hand can thus be summarized as: a) a wrist-mounted micro camera for artificial sensing, enabling a wide range of hand-based tasks; b) real-time object detection and distance estimation for precise grasping; and c) ultra-low-power operation that delivers high performance within constrained power and resource limits.

INDEX TERMS artificial intelligence, prosthetic hand, rehabilitation, vision

I. INTRODUCTION Congenital limb loss and upper extremity abnormalities are estimated to occur in approximately 15 individuals per 100,000 live births in the United States alone [1], [2]. Beyond congenital disabilities, tumors, severe infections, and traumatic injuries also cause pediatric limb deficiency and place a significant physical and emotional burden on a child and their family. Replacement of an upper limb with a functional prosthetic hand has the potential to restore some limb functionality and improve the independence of these children. Furthermore, the earlier children are fitted with a powered prosthesis, the lower the rate of prosthetic hand rejection in later years of their life [3].


Learning-Based Leader Localization for Underwater Vehicles With Optical-Acoustic-Pressure Sensor Fusion

Yang, Mingyang, Sha, Zeyu, Zhang, Feitian

arXiv.org Artificial Intelligence

Underwater vehicles have emerged as a critical technology for exploring and monitoring aquatic environments. The deployment of multi-vehicle systems has gained substantial interest due to their capability to perform collaborative tasks with improved efficiency. However, achieving precise localization of a leader underwater vehicle within a multi-vehicle configuration remains a significant challenge, particularly in dynamic and complex underwater conditions. To address this issue, this paper presents a novel tri-modal sensor fusion neural network approach that integrates optical, acoustic, and pressure sensors to localize the leader vehicle. The proposed method leverages the unique strengths of each sensor modality to improve localization accuracy and robustness. Specifically, optical sensors provide high-resolution imaging for precise relative positioning, acoustic sensors enable long-range detection and ranging, and pressure sensors offer environmental context awareness. The fusion of these sensor modalities is implemented using a deep learning architecture designed to extract and combine complementary features from raw sensor data. The effectiveness of the proposed method is validated through a custom-designed testing platform. Extensive data collection and experimental evaluations demonstrate that the tri-modal approach significantly improves the accuracy and robustness of leader localization, outperforming both single-modal and dual-modal methods.